Results 1 - 15 of 15
1.
ACM International Conference Proceeding Series ; : 311-317, 2022.
Article in English | Scopus | ID: covidwho-20232081

ABSTRACT

The speech signal carries numerous features that characterize a specific language and convey emotion, as well as information that can be used to identify the speaker's mental, psychological, and physical state. Acoustic analysis of speech signals has recently emerged as a practical, automated, and scalable method for medical diagnosis and for monitoring the symptoms of many diseases. In this paper, we explore deep acoustic features from confirmed positive and negative cases of COVID-19 and compare the diagnostic performance of these acoustic features against reported COVID-19 symptoms. The proposed methodology uses a pre-trained Visual Geometry Group (VGG-16) model applied to Mel spectrogram images to extract deep audio features, a K-means algorithm to select effective features, and a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier to classify cases. The experimental findings show that the proposed methodology can distinguish COVID-19 from non-COVID-19 cases from acoustic features, outperforming symptom-based classification with an accuracy of 97%. The results also show that the proposed method markedly improves COVID-19 detection accuracy over the handcrafted features used in previous studies. © 2022 ACM.
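
A minimal sketch of the pipeline this abstract describes, not the authors' code: deep features are pooled from a pre-trained VGG-16 applied to mel-spectrogram images and fed to an SVM. The K-means feature selection and genetic-algorithm search are omitted, and all variable names and sizes are assumptions.

import torch
from torchvision.models import vgg16, VGG16_Weights
from sklearn.svm import SVC

weights = VGG16_Weights.IMAGENET1K_V1
backbone = vgg16(weights=weights).features.eval()  # convolutional trunk only
preprocess = weights.transforms()                  # resize/normalize for VGG input

@torch.no_grad()
def deep_features(spectrogram_image):
    # spectrogram_image: a PIL image of a mel spectrogram (assumption)
    x = preprocess(spectrogram_image).unsqueeze(0)   # (1, 3, 224, 224)
    fmap = backbone(x)                               # (1, 512, 7, 7)
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()  # global average pooling

# Hypothetical usage: X_imgs is a list of spectrogram images, y holds 0/1 labels.
# clf = SVC(kernel="rbf").fit([deep_features(im) for im in X_imgs], y)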

2.
Comput Biol Med ; 161: 107027, 2023 07.
Article in English | MEDLINE | ID: covidwho-2319960

ABSTRACT

The COVID-19 pandemic has highlighted a significant research gap in the field of molecular diagnostics. This has brought forth the need for AI-based edge solutions that can provide quick diagnostic results whilst maintaining data privacy, security, and high standards of sensitivity and specificity. This paper presents a novel proof-of-concept method to detect nucleic acid amplification using ISFET sensors and deep learning. This enables the detection of DNA and RNA on a low-cost and portable lab-on-chip platform for identifying infectious diseases and cancer biomarkers. We show that by using spectrograms to transform the signal to the time-frequency domain, image processing techniques can be applied to achieve reliable classification of the detected chemical signals. Transformation to spectrograms is beneficial as it makes the data compatible with 2D convolutional neural networks, yielding a significant performance improvement over neural networks trained on the time-domain data. The trained network achieves an accuracy of 84% with a size of 30 kB, making it suitable for deployment on edge devices. This facilitates a new wave of intelligent lab-on-chip platforms that combine microfluidics, CMOS-based chemical sensing arrays, and AI-based edge solutions for more intelligent and rapid molecular diagnostics.
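
An illustrative sketch of the time-frequency step described here: a 1-D sensor signal is converted to a spectrogram so a small 2-D CNN can classify it. The sampling rate, window parameters, and network layout are assumptions, not taken from the paper.

import numpy as np
from scipy.signal import spectrogram
import torch.nn as nn

fs = 100.0                      # assumed sampling rate of the ISFET readout (Hz)
signal = np.random.randn(6000)  # placeholder for one amplification recording

f, t, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
log_spec = np.log1p(Sxx)        # compress dynamic range before the CNN

# A deliberately tiny CNN, in the spirit of the ~30 kB deployment target.
tiny_cnn = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(4, 8, kernel_size=3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),  # amplification vs. no amplification
)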


Subject(s)
COVID-19 , Pandemics , Humans , COVID-19/diagnosis , Neural Networks, Computer , DNA , Nucleic Acid Amplification Techniques
3.
21st IEEE International Conference on Machine Learning and Applications, ICMLA 2022 ; : 1702-1707, 2022.
Article in English | Scopus | ID: covidwho-2293069

ABSTRACT

The novel coronavirus disease (COVID-19), declared a pandemic by the World Health Organization on 11 March 2020, has caused over 6 million deaths worldwide. Given the rapid spread of the virus, we exploit a deep learning model for screening, quickly diagnosing altered respiratory conditions. In this paper, we propose a method that uses a convolutional neural network (CNN) to recognize and classify cough audio files into three classes, distinguishing COVID-19 patients, symptomatic subjects, and healthy subjects. Cough audio was recorded with a smartphone's built-in microphone. From the cough recordings we generate spectrogram images and obtain an accuracy of 0.82 with a deep learning network developed by the authors. Our method also provides heatmaps that show the input regions the model relies on for its final prediction, which makes the method explainable. © 2022 IEEE.
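
A hedged sketch of the preprocessing stage described here: a smartphone cough recording is rendered as a spectrogram image that a three-class CNN can consume. The filename, mel resolution, and figure handling are assumptions.

import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("cough_recording.wav", sr=None)        # placeholder filename
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)                   # log-scaled spectrogram

fig, ax = plt.subplots()
librosa.display.specshow(S_db, sr=sr, ax=ax)                # render as an image
ax.set_axis_off()
fig.savefig("cough_spectrogram.png", bbox_inches="tight", pad_inches=0)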

4.
IEEE/ACM Transactions on Audio, Speech, and Language Processing ; : 1-14, 2023.
Article in English | Scopus | ID: covidwho-2306621

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has drastically impacted life around the globe. As life returns to pre-pandemic routines, COVID-19 testing has become a key component, assuring that travellers and citizens are free from the disease. Conventional tests can be expensive, time-consuming (results can take up to 48 h), and require laboratory testing. Rapid antigen testing, in turn, can generate results within 15-30 minutes and can be done at home, but research shows it achieves very poor sensitivity rates. In this paper, we propose an alternative test based on speech signals recorded at home with a portable device. It has been well documented that the virus affects many of the speech production systems (e.g., lungs, larynx, and articulators). As such, we propose the use of new modulation spectral features and linear prediction analysis to characterize these changes, and design a two-stage COVID-19 prediction system by fusing the proposed features. Experiments with three COVID-19 speech datasets (CSS, DiCOVA2, and a Cambridge subset) show that the two-stage feature fusion system outperforms the benchmark systems on the CSS and Cambridge datasets while maintaining lower complexity than deep-learning-based systems. Furthermore, the two-stage system demonstrates higher generalizability to unseen conditions in a cross-dataset testing evaluation scheme. The generalizability and interpretability of our proposed system demonstrate the potential for accessible, low-cost, at-home COVID-19 testing. © IEEE.
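
An illustrative computation of the two feature families this abstract names, a simple modulation spectrum and linear prediction coefficients; the paper's exact feature design and two-stage fusion are not reproduced here, and the filename and frame sizes are assumptions.

import numpy as np
import librosa

y, sr = librosa.load("speech_sample.wav", sr=16000)     # placeholder filename

# Modulation spectrum: a second spectral analysis along the time axis of the
# STFT magnitude, capturing how each acoustic band's envelope fluctuates.
S = np.abs(librosa.stft(y, n_fft=512, hop_length=160))  # acoustic spectrogram
mod_spec = np.abs(np.fft.rfft(S, axis=1))               # FFT across frames

# Linear prediction analysis: an all-pole vocal-tract model for one frame.
frame = y[:400]                                         # 25 ms at 16 kHz
lpc_coeffs = librosa.lpc(frame, order=12)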

5.
2022 International Conference on Data Science, Agents and Artificial Intelligence, ICDSAAI 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2261650

ABSTRACT

Clinicians have long used audio signals created by the human body as indicators to diagnose illness or track disease progression. Preliminary research indicates promise in detecting COVID-19 from voice and cough acoustic signals. In this paper, several popular convolutional neural networks (CNNs) are employed to detect COVID-19 from cough sounds in the Coughvid open-source dataset. The CNN models are given input in the form of handcrafted features or raw signals represented as spectrograms, and the architectures for both types of input were optimized to enhance performance. COVID-19 could be detected from cough sounds with an accuracy of 77.5% using a CNN on handcrafted features, and 72.5% using VGG16 on spectrograms. However, results show that concatenating the two in a multi-head deep neural network yields higher accuracy than using handcrafted features or spectrograms of raw signals alone. Classification accuracy improved to 81.25% when ResNet50 was employed in the multi-head deep neural network, higher than that obtained with VGG16 or MobileNet. © 2022 IEEE.
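
A hedged sketch of the multi-head idea: one branch consumes a handcrafted feature vector, the other a spectrogram image through a ResNet50 backbone, and the two embeddings are concatenated before classification. Layer sizes and the feature dimension are assumptions, not the paper's configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class MultiHeadCoughNet(nn.Module):
    def __init__(self, n_handcrafted=64):   # assumed handcrafted-feature size
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()          # expose the 2048-d embedding
        self.spec_branch = backbone
        self.feat_branch = nn.Sequential(nn.Linear(n_handcrafted, 128), nn.ReLU())
        self.head = nn.Linear(2048 + 128, 2)  # COVID-19 vs. non-COVID

    def forward(self, spec_img, handcrafted):
        z = torch.cat([self.spec_branch(spec_img),
                       self.feat_branch(handcrafted)], dim=1)  # fuse the heads
        return self.head(z)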

6.
10th International Conference on Reliability, Infocom Technologies and Optimization, Trends and Future Directions, ICRITO 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2191926

ABSTRACT

COVID-19 has already had a significant influence on our everyday lives, and the influx of patients infected with newly emerging variants creates a need for a quick, accurate, and remote mode of identification. Cough sounds can play a vital role in identifying COVID-19 in individuals: they can be used to determine whether a person is infected with COVID-19, even in the presence of a pre-existing respiratory ailment. Hence, we focus on providing a widely accessible and scalable solution through real-time detection of the 'COVID cough' via a machine learning model trained on a recorded 'COVID cough' dataset. Based on the input, the model assesses the recording and provides the person with a diagnosis. © 2022 IEEE.

7.
IEEE Access ; 10: 134785-134798, 2022.
Article in English | Web of Science | ID: covidwho-2191673

ABSTRACT

Since the beginning of the COVID-19 pandemic, the demand for unmanned aerial vehicles (UAVs) has surged owing to an increasing requirement for remote, noncontact, and technologically advanced interactions. However, with the increased demand for drones across a wide range of fields, their malicious use has also increased. Therefore, an anti-UAV system is required to detect unauthorized drone use. In this study, we propose a radio frequency (RF) based solution that uses 15 drone controller signals. The proposed method addresses the problems of RF-based detection, whose classification accuracy deteriorates when the distance between the controller and antenna increases or the signal-to-noise ratio (SNR) decreases owing to the presence of a large amount of noise. For the experiment, we varied the SNR of the controller signal by adding white Gaussian noise, producing SNRs from -15 to 15 dB at 5 dB intervals. A power-based spectrogram image with an applied threshold value was used for convolutional neural network training. The proposed model achieved 98% accuracy at an SNR of -15 dB and 99.17% accuracy in the classification of 105 classes covering 15 drone controllers across 7 SNR regions. These results confirm that the proposed method is both noise-tolerant and scalable.
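
A minimal sketch of the noise-injection step: white Gaussian noise is added to a signal to reach a target SNR, as in the -15 to +15 dB sweep described above. The RF capture here is a placeholder array.

import numpy as np

def add_awgn(signal, snr_db):
    """Return the signal corrupted with white Gaussian noise at snr_db."""
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

rf = np.random.randn(10_000)                      # placeholder RF capture
noisy = {snr: add_awgn(rf, snr) for snr in range(-15, 16, 5)}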

8.
2nd International Conference of Smart Systems and Emerging Technologies, SMARTTECH 2022 ; : 136-141, 2022.
Article in English | Scopus | ID: covidwho-2018985

ABSTRACT

In health-related applications for infectious respiratory diseases, including COVID-19, wireless (non-invasive) technology plays a vital role in monitoring breathing abnormalities. Wireless techniques are particularly important during the COVID-19 pandemic since they require minimal interaction between infected individuals and medical staff. Recent medical research reports that individuals infected with the COVID-19 Delta variant experience rapid respiratory rates due to widespread disease in the lungs. These circumstances necessitate instantaneous monitoring of respiratory patterns. In this study, the XeThru X4M200 ultra-wideband radar sensor is used to extract vital breathing patterns. This radar sensor operates in low and high frequency ranges (6.0-8.5 GHz and 7.25-10.20 GHz). Data were acquired from healthy subjects performing eupnea (regular/normal) and tachypnea (irregular/rapid) breathing patterns and represented as spectrograms. A Residual Neural Network (ResNet), a state-of-the-art deep learning algorithm, is used to train, validate, and test on the acquired spectrograms. The confusion matrix, precision, recall, F1-score, and accuracy are used to evaluate the ResNet model's performance. ResNet's skip-connection technique minimises the underfitting/overfitting problem, providing an accuracy rate of up to 97.5%. © 2022 IEEE.
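
A minimal residual block illustrating the skip connection the abstract credits with reducing under/overfitting; this is a generic sketch, not the exact ResNet variant used in the study.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # the skip connection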

9.
3rd International Conference on Image Processing and Capsule Networks, ICIPCN 2022 ; 514 LNNS:397-410, 2022.
Article in English | Scopus | ID: covidwho-2013946

ABSTRACT

Quantum computation, particularly in the field of machine learning, is a rapidly growing technology; a major advantage of quantum computing is the speed at which it performs calculations. This paper proposes a novel model architecture for feature extraction. It extracts features from color spectrograms, extending the existing Quanvolutional Neural Network, which works only on grayscale images or two-dimensional representations of spectrograms. The proposed architecture operates on all three channels of an RGB image and uses random quantum circuits to extract features, distributing them into several output images; the one containing the most important and pertinent features of the original image is selected to aid the training of the downstream CNN. A COVID-19 use case is used for performance evaluation. Standard testing methods for detecting the virus, such as RT-PCR tests and CT scans, are expensive, require a medical professional in close proximity to the patient, and rely on single-use testing kits. One of the most evident changes in a COVID-19 patient is in their coughing and breathing patterns. This work analyzes spectrograms of cough and breath audio samples from COVID-19 patients using the proposed model architecture and reports the results. Finally, to demonstrate the model's general applicability, it is also run on an Alzheimer's disease dataset, with corresponding results provided. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
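
A hedged sketch of a quanvolutional patch filter using PennyLane; applying it per RGB channel, as the paper proposes, would simply mean running it over each channel's 2x2 patches. The circuit depth and pixel encoding are assumptions.

import numpy as np
import pennylane as qml

n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)
rand_params = np.random.uniform(0, 2 * np.pi, size=(1, n_wires))

@qml.qnode(dev)
def quanv_patch(patch):
    # Angle-encode a flattened 2x2 pixel patch (values assumed in [0, 1]).
    for wire, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=wire)
    qml.RandomLayers(rand_params, wires=range(n_wires))  # random quantum circuit
    return [qml.expval(qml.PauliZ(w)) for w in range(n_wires)]

# Example: one patch maps to four output-channel values.
features = quanv_patch(np.array([0.1, 0.5, 0.9, 0.3]))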

10.
16th International Conference on Ubiquitous Information Management and Communication, IMCOM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1788736

ABSTRACT

The outbreak of coronavirus (COVID-19) resulted in numerous deaths and several significant negative impacts on many levels of human life, including disruptions of schools, universities, and vocational education, a global economic recession, and increased poverty levels [1]. Several COVID-19 diagnosis mechanisms are currently in use, such as Polymerase Chain Reaction (PCR) tests. The false-negative rate of a PCR test varies depending on how long the infection has been present in the patient: studies show that the false-negative rate was 20% when testing was performed five days after symptoms began, but much higher (up to 100%) in earlier infection stages. Although the PCR test can be considered relatively accurate, it is also quite costly, ranging from 125 to 250 USD. Moreover, results take a long time to be returned; given the sensitivity of the situation, this delay is quite risky. The aim of this research is to contribute to the discovery and analysis of COVID-19 invariants in order to assist medical diagnosis of the disease, and to apply deep learning for social good by implementing an auxiliary screening tool for COVID-19 testing that is accurate, cheap, and fast. Cheaper testing options, such as X-ray and Computerized Tomography (CT) lung scans and cough audio recordings, have been examined with promising results. Two Convolutional Neural Network (CNN) models were developed. The first was trained on a dataset of 38,000 CT and X-ray lung scans to identify whether a scan comes from a healthy person, a COVID-19 patient, or a pneumonia patient; this model achieved an accuracy of 95.9%. Transfer learning was applied to this model to test its generalizability beyond the given training datasets. For the second CNN model, about 2,000 cough audio recordings were converted into Mel spectrograms and used to train the model to identify whether a cough comes from a COVID-19 patient; this model achieved an accuracy of 82.1%. © 2022 IEEE.
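
A hedged sketch of the transfer-learning step mentioned above: reuse a pre-trained CNN, freeze its backbone, and fine-tune only a new classification head on another dataset. The ResNet18 backbone here is a stand-in, not the authors' model.

import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                         # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 3)       # healthy / COVID-19 / pneumonia

# Only model.fc.parameters() would be passed to the optimizer when fine-tuning.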

11.
Journal of Earth System Science ; 131(2), 2022.
Article in English | ProQuest Central | ID: covidwho-1782951

ABSTRACT

Seismographs record earthquakes but also various types of noise, including anthropogenic noise. In the present study, we analyse the influence of the COVID-19 lockdown on ground motion at the CSIR-NGRI HYB Seismological Observatory, Hyderabad. We analyse the noise recorded a week before and after the implementation of lockdown by estimating probability density functions of the seismic power spectral density and by constructing daily spectrograms. We find that during the lockdown period, at low frequencies (<1 Hz), where noise is typically dominated by naturally occurring microseismic noise, power dropped by ~2 dB for secondary microseisms (3–7 s period), while at higher frequencies (1–10 Hz) a reduction of ~6 dB was observed. The reduction at higher frequencies, which correspond to anthropogenic noise sources, improved the signal-to-noise ratio (SNR) by a factor of 2 in the frequency band of microearthquakes, enabling the identification of microearthquakes with Ml around 3 at epicentral distances of 180 km.
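
An illustrative check of the kind reported here: compare Welch power spectral densities (in dB) of ground-motion records from before and after lockdown. The traces, sampling rate, and window length are placeholders; the study itself uses PSD probability density functions rather than this simplified comparison.

import numpy as np
from scipy.signal import welch

fs = 100.0                                        # assumed sampling rate (Hz)
pre_lockdown = np.random.randn(int(fs * 3600))    # placeholder hour-long traces
post_lockdown = np.random.randn(int(fs * 3600))

f, psd_pre = welch(pre_lockdown, fs=fs, nperseg=4096)
_, psd_post = welch(post_lockdown, fs=fs, nperseg=4096)
drop_db = 10 * np.log10(psd_pre / psd_post)       # positive = quieter after

band = (f >= 1) & (f <= 10)                       # anthropogenic band examined above
print(f"mean reduction 1-10 Hz: {drop_db[band].mean():.1f} dB")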

12.
Pattern Recognit ; 127: 108656, 2022 Jul.
Article in English | MEDLINE | ID: covidwho-1740084

ABSTRACT

This study presents the Auditory Cortex ResNet (AUCO ResNet), a biologically inspired deep neural network designed for sound classification and, more specifically, for COVID-19 recognition from audio recordings of coughs and breaths. Unlike other approaches, it can be trained end-to-end, optimizing with gradient descent all the modules of the learning algorithm: mel-like filter design, feature extraction, feature selection, dimensionality reduction, and prediction. The network includes three attention mechanisms, namely the squeeze-and-excitation mechanism, the convolutional block attention module, and a novel sinusoidal learnable attention. These attention mechanisms merge relevant information from activation maps at various levels of the network. The network takes raw audio files as input and is also able to fine-tune the feature extraction stage: a mel-like filter is learned during training, adapting the filter banks to the important frequencies. AUCO ResNet provides state-of-the-art results on many datasets. It was first tested on several datasets of COVID-19 coughs and breaths; coughs and breaths are language-independent, allowing cross-dataset tests aimed at assessing generalization. These tests demonstrate that the approach can be adopted as a low-cost, fast, and remote COVID-19 pre-screening tool. The network was also tested on the well-known UrbanSound8K dataset, achieving state-of-the-art accuracy without any data preprocessing or data augmentation.
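
A sketch of one of the three attention mechanisms named above, the squeeze-and-excitation block; the paper's sinusoidal learnable attention is its own contribution and is not reproduced here.

import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                  # squeeze: global average pool
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # excite: reweight channels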

13.
Forests ; 13(2):264, 2022.
Article in English | ProQuest Central | ID: covidwho-1715216

ABSTRACT

In the context of rapid urbanization, urban foresters are actively seeking management monitoring programs that address the challenges of urban biodiversity loss. Passive acoustic monitoring (PAM) has attracted attention because it allows for the collection of data passively, objectively, and continuously across large areas and for extended periods. However, it continues to be a difficult subject due to the massive amount of information that audio recordings contain. Most existing automated analysis methods have limitations in their application in urban areas, with unclear ecological relevance and efficacy. To better support urban forest biodiversity monitoring, we present a novel methodology for automatically extracting bird vocalizations from spectrograms of field audio recordings, integrating object-based classification. We applied this approach to acoustic data from an urban forest in Beijing and achieved an accuracy of 93.55% (±4.78%) in vocalization recognition while requiring less than ⅛ of the time needed for traditional inspection. The difference in efficiency would become more significant as the data size increases because object-based classification allows for batch processing of spectrograms. Using the extracted vocalizations, a series of acoustic and morphological features of bird-vocalization syllables (syllable feature metrics, SFMs) could be calculated to better quantify acoustic events and describe the soundscape. A significant correlation between the SFMs and biodiversity indices was found, with 57% of the variance in species richness, 41% in Shannon’s diversity index and 38% in Simpson’s diversity index being explained by SFMs. Therefore, our proposed method provides an effective complementary tool to existing automated methods for long-term urban forest biodiversity monitoring and conservation.
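
A hedged sketch of object-based extraction: threshold a spectrogram and treat connected regions as candidate vocalization syllables. The threshold, minimum area, and placeholder spectrogram are assumptions; the paper's classification rules and SFM definitions are not reproduced.

import numpy as np
from skimage.measure import label, regionprops

spec_db = np.random.rand(128, 1000) * 60 - 60   # placeholder dB spectrogram
mask = spec_db > -20                            # assumed energy threshold

for region in regionprops(label(mask)):
    if region.area < 50:                        # drop tiny noise blobs
        continue
    f0, t0, f1, t1 = region.bbox                # syllable bounding box
    duration_bins, bandwidth_bins = t1 - t0, f1 - f0
    # ... compute syllable feature metrics (SFMs) from the cropped region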

14.
2021 IEEE Biomedical Circuits and Systems Conference, BioCAS 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1699878

ABSTRACT

Coronavirus disease 2019 (COVID-19) is an acute infectious pneumonia that causes dyspnea and slowed breathing in severe patients. Since the respiratory cycle can be analyzed for auxiliary diagnosis, an automatic respiration detection system can replace the stethoscope for measuring a patient's respiratory cycle in the isolation ward, enabling real-time monitoring. In this paper, we propose a convolutional neural network (CNN) model that can effectively detect the cycle of breath sounds in COVID-19 patients. Mel spectrogram features were extracted from data collected from hospital patients, and a convolutional neural network was then trained on them. Tests across different cases show that the sensitivity of this method is 90.03% and the average accuracy is 91.32%. © 2021 IEEE.

15.
8th International Conference on Advanced Informatics: Concepts, Theory, and Application, ICAICTA 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1672704

ABSTRACT

Coronavirus disease 2019 (COVID-19) is an ongoing pandemic. By mid-2021, total COVID-19 cases had reached 171 million worldwide. The virus is mainly transmitted through droplets generated when an infected person coughs, sneezes, or exhales. The most common symptoms are fever, cough, and fatigue. Diagnosis is currently done through Reverse-Transcription Polymerase Chain Reaction (RT-PCR) testing. Although this is the current gold standard, the method has several downsides: RT-PCR is costly, time-consuming, and can lead to further infection if done improperly. In this paper, we utilize AI to classify COVID-19 using cough sounds. This method can work as a triage tool to help prioritize people for further diagnosis. Our contribution is the evaluation of several feature extraction, imbalance handling, and modelling techniques for classifying COVID-19 from cough sounds. We obtained the best result using the combination of the NMF-Spectrogram feature, an undersampling method, and an SVM, which gives a sensitivity of 90.9%, a specificity of 55.6%, and an overall AUC-ROC of 73.3%. We also found that the NMF-Spectrogram feature works better than MFCC-based features. © 2021 IEEE.
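
An illustrative pipeline in the spirit of the best-performing combination named above: NMF applied to spectrograms, random undersampling of the majority class, and an SVM. The NMF rank, feature shapes, and toy data are assumptions.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC
from imblearn.under_sampling import RandomUnderSampler  # imbalanced-learn package

# specs: (n_samples, n_freq_bins * n_frames) non-negative magnitude spectrograms
specs = np.abs(np.random.randn(200, 2048))               # placeholder data
labels = np.array([0] * 180 + [1] * 20)                  # imbalanced toy labels

X = NMF(n_components=16, max_iter=500).fit_transform(specs)   # NMF activations
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X, labels)

clf = SVC(kernel="rbf", probability=True).fit(X_bal, y_bal)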
